Test Plan:

A TEST PLAN is a dynamic document describing software testing scope and activities.
It is the basis for formally testing any software/product in a project.

 	OR

A test plan is a document describing the scope, approach, objectives, resources, schedule, etc., of a software testing effort. It identifies the items to be tested, the items not to be tested, the resources, the test approach to be followed, the training needs of the team, the testing schedule, etc.


Contents of Test Plan:

1. Objectives
2. Scope
3. Approach
4. Testing Methodologies
5. Deliverables
6. Assumptions
7. Risk
8. Contingency OR Mitigation Plan
9. Entry & Exit Criteria
10. Test Environment
11. Defects
12. Templates
13. Schedules



1. Objectives
    It describes the purpose of creating the test plan doc.
---------------------------------
2. Scope:
   It describes the testing scope for the project.
   It is of 2 types:
   (a) Features to be tested.
   (b) Features not to be tested.

(a) Features to be tested:
   List of features tested as part of functional testing:
   1.
   2.
   3.

   List of features tested as part of Integration testing:
   1.
   2.
   3.

   List of features tested as part of System testing:
   1.
   2.
   3.


(b) Features not to be tested.
    Except for the above mentioned requirements, no other requirements will be considered for testing.

	OR
	
 1. Third-party features are not in the testing scope.
 2. The feature/functionality which requires PROD setup will not be tested offshore.
----------------------------------

3. Approach:
   The approach which we follow to test the s/w.
   It is of 2 types:
   (1) By writing high level scenarios
   (2) By writing flow graphs
   
   For the above approaches we can use:
   (c) Manual Testing
   (d) Automation Testing
   (e) Manual + Automation
-------------------------------------
4. Testing Methodologies:
   The types of testing which we are going to perform on the application.
   (a) Functional testing
   (b) UAT 
   (c) Compatibility
   (d) Integration
   (e) Performance
-----------------------------------
5. Deliverables:
   Any artifact which will be shared with the customer is k.a., a deliverable.
The common deliverables would be:
  (a) Software (which consists of source code (.jar file), pages OR JSP (.war file) and DB tables (.sql files))
  (b) Release Note doc
  (c) Installation Guide Doc
  (d) Consolidated Test Execution Report
  (e) Consolidated Defect Report
--------------------------------------
6. Assumptions
7. Risk
8. Contingency OR Mitigation Plan
    All these 3 points are inter-related.

Ex: 1   
    *Assumption is that the 3 modules will be tested by 3 Testers.
    *If any resource quits the job then there is a risk.
    *Mitigation plan is that:
 - M1 is a primary module for Tester1 & secondary module to Tester2
 - M2 is a primary module for Tester2 & secondary module to Tester3
 - M3 is a primary module for Tester3 & secondary module to Tester1
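The primary/secondary rotation above can be sketched as a small lookup table, e.g. in Python (the module and tester names come from the example; the helper function is just an illustration):

```python
# Primary/secondary backup rotation from the mitigation plan above.
assignments = {
    "M1": {"primary": "Tester1", "secondary": "Tester2"},
    "M2": {"primary": "Tester2", "secondary": "Tester3"},
    "M3": {"primary": "Tester3", "secondary": "Tester1"},
}

def owner(module, available_testers):
    """Return who tests the module: the primary if available, else the secondary."""
    a = assignments[module]
    return a["primary"] if a["primary"] in available_testers else a["secondary"]

# If Tester2 quits, M2 falls back to its secondary tester:
print(owner("M2", {"Tester1", "Tester3"}))  # Tester3
```

Because every module has a secondary tester, the loss of any single resource never leaves a module uncovered.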



Ex: 2
   * Assumption is that the test data OR DB dump will be given by the customer for system testing.
   * If the customer does not provide the test data then there is a risk.

Mitigation plan is that:
 - Ask customer to train the resource on test data preparation
 - K.T (Knowledge Transfer) should be provided to the offshore team members
 - The customer should provide a POC (Point Of Contact) for the offshore team members for test data related queries.
-----------------------------------------
9. Entry & Exit Criteria
   When to start & when to stop the testing is termed as Entry & Exit Criteria.

(1a)Entry criteria for functional Testing:
1. Build should be ready with unit test results from dev.
2. Test Bed should be ready & Build should be deployed for testing.
3. Smoke testing should be completed with +ve results.
4. Functional test cases should be written, reviewed and approved.
5. Resources should be available.

(1b) Exit criteria for functional testing:
1. All the selected test cases should be executed and passed
              OR
2. All the High priority test cases are executed and passed.
3. All the blocker & Critical defects should be fixed and validated.


(2a) Entry criteria for Integration Testing:
1. The exit criteria of functional testing should be met.
2. The build should be deployed in test environment.
3. The build should be ready with at least 3-4 interfaces implemented, along with unit test reports from dev.
4. Integration test cases are written, reviewed and approved.
5. Resources should be available.

(2b) Exit criteria for Integration testing:
1. All the selected Integration test cases should be executed and passed.
        OR 
2. All the High & medium test cases should be executed & Passed
3. All the blocker, critical & medium defects should be fixed and validated.


(3a) Entry criteria for system testing:
1. Exit criteria of integration testing should be met.
2. Build should be ready with at least 5-6 end-to-end functionalities implemented.
3. Test data OR DB dump from the customer should be ready
4. System test cases are written, reviewed and approved.
5. Resources should be available.


(3b) Exit criteria for System Testing:
1. All the selected test cases are executed and passed
2. All the defects should be fixed and validated.
------------------------------------

10. Test Environment:
   A software/build consists of 3 segments (for Web applications):
  (a) source code/business logic: (in the form of .jar file)
  (b) Pages/jsp (in the form of .war file)
  (c) DB tables (in the form of .sql file)

So to install the s/w:
we need to have an app server (WebLogic OR JBoss) for deploying the .jar file.
We need to have a web server (Apache Tomcat) for deploying the .war files.
We need a DB server (Oracle, DB2, SQL Server, MongoDB etc) to run the .sql files.
  
   All these servers are connected to each other in a systematic manner. By this we can run & access the s/w.


Q: Explain application Architecture?
Ans: Explain 'n' tier architecture



Q: What is a Test Bed OR Test Environment?
Ans: A server OR set of servers in which the s/w is deployed for testing purposes is k.a., the Test Environment (or Test Bed). Note: a Test Suite is a collection of test cases, not an environment.


Q: Who creates a test Environment?
Ans: Dev/QA/Build Engineer/Deployment Engineer can create a test bed.
** In an interview it is preferred to say that "in our organization we have a Build/DevOps Engineer".


Q: Who will deploy the s/w?
Ans: Dev/QA/Deployment Eng/Build Engineer can deploy the s/w.
** In an interview it is preferred to say that "in our organization we have a Build/DevOps Engineer".
------------------------------------------

11. Defects: 
     This section contains the following:
(a) Which defect tracking tool will be used in the project
(b) Defect Severity & Its levels
(c) Defect Priority & Its levels


(a) Defect Tracking Tool: There are many defect tracking tools available in the market. The globally known tools are Bugzilla, JIRA, ALM, TFS, Mantis, DevTrack, SpiraTest etc.

(b) Defect Severity & Its levels:
    Defect Severity: How badly the functionality is affected is k.a., Defect Severity.
Severity Levels:
  (a) Blocker/Show stopper
  (b) Critical
  (c) Major
  (d) minor
  (e) Cosmetic
  (f) Trivial
  (g) Enhancement


(c) Defect Priority & Its levels:
    Defect Priority: How fast the defect should be fixed is k.a., Defect Priority.
Priority Levels:
  (a) V. High
  (b) High
  (c) Medium
  (d) Low
  (e) V. Low


Q: Who will define the severity & priority while logging the defect?
Ans: In our organization the QA who logs the defect will assign the Severity & Priority.


Q: To whom you assign the defect?
Ans: In our organization we will assign the defects to the respective dev.
-------------------------------------
12. Templates:
     It is a company std format used to create/write QA related documents. (Ex: Test case templates, Defect template, RTM templates etc)
	 
            OR
			
A system that helps you arrange information on a computer screen.

All the Test case templates, Defect templates, RTM templates etc are attached in the Test Plan doc, OR at least the doc will mention the location on the server where all the templates are stored.
-----------------------------------
13. Schedules:
    A tentative estimation/deadlines for all the project related tasks.
Ex:
By Jan 2022: 
    01st - 05th: understand all the requirements
    06th - 10th: Write System, Integration & Functional test cases.





===============================================================
Q: Who creates the Test plan?
Ans: QA prepares the test plan, especially the Test Lead/Sr. QA/Manager.


Q: Have you involved in Test plan Creation?
Ans: Yes. Along with my Test Lead/QA Manager I was also involved in test plan creation.
EX: Updated Test Environment section, defect section etc


Q: Do we need to create Test plan for every Sprint?
Ans: Yes. As every sprint will have a different scope, we need a separate test plan for each sprint.


Q: Can Dev prepare the test plan?
Ans: No. It is created by QA only.


Q: What are the different types of test plan?
Ans:
There are three types of Test Plans: 
	1) Master Test Plan
	2) Testing Level Specific Test Plan
	3) Testing Type Specific Test Plan.
	
	
Q: What are the risks that should be avoided for a testing project?
Ans. One should avoid the following risks during a testing project: 
	1) human resource risk (resource crunch)
	2) project schedule risk (missed deadlines)
	3) strategy risk (exceeding allocated budgets)
	4) project definition risk.
	
	

Q: What are the informal reviews? Do you document informal reviews?
Ans:
An informal review is a way of checking a work product for defects without executing the code and without following a formal process. No, informal reviews do not require documentation.



Q: What is Software Test Estimation?
Ans: 
Test Estimation is a management activity which approximates how long a Task would take to complete. Estimating effort for the test is one of the major and important tasks in Test Management.



Q: What are the estimation techniques available?
Ans:
	(a) Work Breakdown Structure
	(b) 3-Point Software Testing Estimation Technique
	(c) Delphi technique
	(d) Function Point/Testing Point Analysis
	(e) Use-Case Point Method
	(f) Percentage distribution
	(g) Ad-hoc method




Q: What is a three-point estimation?
Ans. In a three-point estimation, three different values are calculated based on previous experience. These are:
	1) the best-case estimate
	2) the most likely estimate.
	3) the worst-case estimate.
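A minimal sketch of how the three values are commonly combined, using the standard PERT weighted average E = (O + 4M + P) / 6; the day counts below are made-up illustrations, not from any real project:

```python
def three_point_estimate(best, likely, worst):
    """PERT weighted average: E = (O + 4M + P) / 6.
    The 'most likely' value gets 4x weight; the two extremes get 1x each."""
    return (best + 4 * likely + worst) / 6

# Hypothetical task: best case 2 days, most likely 3 days, worst case 6 days.
print(three_point_estimate(2, 3, 6))  # ~3.33 days
```

The heavy weight on the most likely value keeps one pessimistic outlier from dominating the estimate.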
	



Q: How can you determine the quality of the test execution?
Ans:
You can determine the quality of test execution by: 

Defect Rejection Ratio (DRR): (No. of defects rejected / total no. of defects raised) X 100
Defect Leakage Ratio (DLR): (No. of defects missed / total no. of defects in the software) X 100

A smaller value of DRR and DLR indicates a better quality of test execution.
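The two ratios above translate directly into code; a minimal Python sketch with made-up defect counts:

```python
def defect_rejection_ratio(rejected, total_raised):
    """DRR: percentage of raised defects that were rejected (e.g. invalid/duplicate)."""
    return rejected / total_raised * 100

def defect_leakage_ratio(missed, total_defects):
    """DLR: percentage of the software's defects that testing missed."""
    return missed / total_defects * 100

# Illustrative numbers: 5 of 100 raised defects rejected, 2 of 50 defects missed.
print(defect_rejection_ratio(5, 100))  # 5.0
print(defect_leakage_ratio(2, 50))     # 4.0
```

A high DRR points to weak defect reporting (many invalid logs), while a high DLR points to gaps in test coverage.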


